FAIR: Fair adversarial instance re-weighting

Authors

Abstract

With growing awareness of the societal impact of artificial intelligence, fairness has become an important aspect of machine learning algorithms. The issue is that human biases towards certain groups of the population, defined by sensitive features like race and gender, are introduced to the training data through collection and labeling. Two directions of fairness-ensuring research have focused on (i) instance weighting, in order to decrease the influence of more biased instances, and (ii) adversarial training, in order to construct representations that are informative of the target variable but uninformative of the sensitive attributes. In this paper we propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions. By merging the two paradigms, it inherits desirable properties from both: the interpretability of reweighting and the end-to-end trainability of adversarial training. We propose four different variants of the method and, among other things, demonstrate how it can be cast in a fully probabilistic framework. Additionally, a theoretical analysis of the FAIR models' properties is provided. We compare the models to ten related state-of-the-art models and show that they are able to achieve a better trade-off between accuracy and unfairness. To the best of our knowledge, this is the first model that merges the two approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
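The abstract describes combining two ideas: a learned per-instance weighting function and an adversary that tries to recover the sensitive attribute. The sketch below illustrates that general combination in PyTorch; it is not the authors' exact FAIR objective, and the network shapes, the weight normalisation, and the trade-off parameter `lam` are illustrative assumptions only.

```python
# Minimal PyTorch sketch of the general "adversarial instance re-weighting"
# idea described in the abstract. NOT the authors' exact FAIR formulation;
# architectures, the weight normalisation, and `lam` are assumptions.
import torch
import torch.nn as nn


class Predictor(nn.Module):
    """Predicts the binary target y from features x (returns logits)."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


class Weighter(nn.Module):
    """Learns a per-instance weight in (0, 1); these weights are the interpretable part."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)


class Adversary(nn.Module):
    """Tries to recover the sensitive attribute s from the prediction logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, y_logit):
        return self.net(y_logit.unsqueeze(-1)).squeeze(-1)


def train_step(x, y, s, predictor, weighter, adversary, opt_main, opt_adv, lam=1.0):
    """One alternating update; y and s are float tensors with values in {0, 1}."""
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # (1) Adversary step: learn to predict s from the current (frozen)
    #     predictions on the re-weighted data.
    with torch.no_grad():
        y_logit = predictor(x)
        w = weighter(x)
        w = w / (w.mean() + 1e-8)  # keep the average weight near 1
    adv_loss = (w * bce(adversary(y_logit), s)).mean()
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # (2) Predictor + weighter step: fit y well on the re-weighted data while
    #     making the adversary's job hard, i.e. down-weight instances whose
    #     predictions leak the sensitive attribute.
    y_logit = predictor(x)
    w = weighter(x)
    w = w / (w.mean() + 1e-8)
    pred_loss = (w * bce(y_logit, y)).mean()
    leak_loss = (w * bce(adversary(y_logit), s)).mean()
    main_loss = pred_loss - lam * leak_loss
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return pred_loss.item(), leak_loss.item()
```

A typical setup would pair one optimiser with the predictor and weighter (e.g. `torch.optim.Adam(list(predictor.parameters()) + list(weighter.parameters()))`) and a second one with the adversary. After training, the learned weights `w(x)` can be inspected per instance, which reflects the interpretability benefit the abstract attributes to reweighting-based approaches.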


Similar resources

Fair is fair: We must re-allocate livers for transplant

The 11 original regions for organ allocation in the United States were determined by proximity between hospitals that provided deceased donors and transplant programs. As liver transplants became more successful and demand rose, livers became a scarce resource. A national system has been implemented to prioritize liver allocation according to disease severity, but the system still operates with...


FAIR: Fair Audience InfeRence

Given the recent changes in the policy governing Internet content distribution, such as the institution of per listener royalties for Internet radio broadcasters, content distributors now have an incentive to under-report the size of their audience. Previous audience measurement schemes only protect against inflation of audience size. We present the first protocols for audience measurement that...


Fair Is Not Fair Everywhere.

Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different i...


How Fair is Fair Trade?

This paper investigates to what extent fair trade programmes are indeed ‘fair’. This is accomplished by comparing fair trade with free trade and protectionist trade regimes on their compliance with the criteria set by the fair trade movement itself. This comparison is made using comparative cost based and economies of scale models. It is found that whether or not fair trade is superior to free t...


On Fair Exchange, Fair Coins and Fair Sampling

We study various classical secure computation problems in the context of fairness, and relate them with each other. We also systematically study fair sampling problems (i.e., inputless functionalities) and discover three levels of complexity for them. Our results include the following: – Fair exchange cannot be securely reduced to the problem of fair coin-tossing by an r-round protocol, except w...



Journal

Journal title: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.12.082